121 research outputs found

    The Positive Soundscape Project: A Synthesis of Results from Many Disciplines

    This paper takes an overall view of ongoing findings from the Positive Soundscape Project, a large inter-disciplinary soundscapes study which is nearing completion. Qualitative fieldwork (soundwalks and focus groups) and lab-based listening tests have revealed that two key dimensions of the emotional response are calmness and vibrancy. In the lab these factors explain nearly 80% of the variance in listener response. Physiological validation is being sought using fMRI measurements, and these have so far shown significant differences in the response of the brain to affective and neutral soundscapes. A conceptual framework which links the key soundscape components and which could be used for future design is outlined. Metrics are suggested for some perceptual scales, and possibilities for soundscape synthesis for design and user engagement are discussed, as are the applications of the results to future research and environmental noise policy.

    A Technical Comparison of Digital Frequency-Lowering Algorithms Available in Two Current Hearing Aids

    Background: Recently two major manufacturers of hearing aids introduced two distinct frequency-lowering techniques that were designed to compensate in part for the perceptual effects of high-frequency hearing impairments. The Widex "Audibility Extender" is a linear frequency transposition scheme, whereas the Phonak "SoundRecover" scheme employs nonlinear frequency compression. Although these schemes process sound signals in very different ways, studies investigating their use by both adults and children with hearing impairment have reported significant perceptual benefits. However, the modifications that these innovative schemes apply to sound signals have not previously been described or compared in detail. Methods: The main aim of the present study was to analyze these schemes' technical performance by measuring outputs from each type of hearing aid with the frequency-lowering functions enabled and disabled. The input signals included sinusoids, flute sounds, and speech material. Spectral analyses were carried out on the output signals produced by the hearing aids in each condition. Conclusions: The results of the analyses confirmed that each scheme was effective at lowering certain high-frequency acoustic signals, although both techniques also distorted some signals. Most importantly, the application of either frequency-lowering scheme would be expected to improve the audibility of many sounds having salient high-frequency components.
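The two kinds of frequency mapping contrasted above can be illustrated with a toy sketch of how each remaps component frequencies. The cutoff, compression ratio, source band, and shift below are illustrative placeholders, not the parameters used in the Widex or Phonak devices.

```python
def nonlinear_compression(f_hz, cutoff=2000.0, ratio=2.0):
    """Nonlinear frequency compression (sketch): frequencies below the
    cutoff pass through unchanged; above it, they are compressed toward
    the cutoff on a log-frequency scale by the given ratio."""
    if f_hz <= cutoff:
        return f_hz
    return cutoff * (f_hz / cutoff) ** (1.0 / ratio)


def linear_transposition(f_hz, source_lo=4000.0, shift_hz=2000.0):
    """Linear frequency transposition (sketch): a fixed high-frequency
    source band is shifted down by a constant offset in Hz, preserving
    the linear (Hz) spacing between components within that band."""
    return f_hz - shift_hz if f_hz >= source_lo else f_hz
```

This captures the qualitative difference the paper's spectral analyses probe: compression is monotonic (the output never overlaps the unprocessed region) but warps harmonic spacing, whereas transposition preserves Hz spacing within the moved band but can overlay it on lower-frequency content.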

    The Frequency Following Response (FFR) May Reflect Pitch-Bearing Information But is Not a Direct Representation of Pitch

    The frequency following response (FFR), a scalp-recorded measure of phase-locked brainstem activity, is often assumed to reflect the pitch of sounds as perceived by humans. In two experiments, we investigated the characteristics of the FFR evoked by complex tones. FFR waveforms to alternating-polarity stimuli were averaged for each polarity and added, to enhance envelope, or subtracted, to enhance temporal fine structure information. In experiment 1, frequency-shifted complex tones, with all harmonics shifted by the same amount in Hertz, were presented diotically. Only the autocorrelation functions (ACFs) of the subtraction-FFR waveforms showed a peak at a delay shifted in the direction of the expected pitch shifts. This expected pitch shift was also present in the ACFs of the output of an auditory nerve model. In experiment 2, the components of a harmonic complex with harmonic numbers 2, 3, and 4 were presented either to the same ear (“mono”) or the third harmonic was presented contralaterally to the ear receiving the even harmonics (“dichotic”). In the latter case, a pitch corresponding to the missing fundamental was still perceived. Monaural control conditions presenting only the even harmonics (“2 + 4”) or only the third harmonic (“3”) were also tested. Both the subtraction and the addition waveforms showed that (1) the FFR magnitude spectra for “dichotic” were similar to the sum of the spectra for the two monaural control conditions and lacked peaks at the fundamental frequency and other distortion products visible for “mono” and (2) ACFs for “dichotic” were similar to those for “2 + 4” and dissimilar to those for “mono.” The results indicate that the neural responses reflected in the FFR preserve monaural temporal information that may be important for pitch, but provide no evidence for any additional processing over and above that already present in the auditory periphery, and do not directly represent the pitch of dichotic stimuli.
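The polarity addition/subtraction step and the ACF peak analysis described above can be sketched as follows. Function names and the ACF search window are assumptions for illustration, not the study's actual analysis code.

```python
import numpy as np

def envelope_and_tfs(resp_pos, resp_neg):
    """Combine averaged FFRs to opposite-polarity stimuli: the sum keeps
    polarity-invariant (envelope-related) activity, while the difference
    keeps polarity-following (temporal fine structure) activity."""
    return (resp_pos + resp_neg) / 2.0, (resp_pos - resp_neg) / 2.0

def acf_peak_delay(x, fs, min_s=0.002, max_s=0.02):
    """Delay (in seconds) of the largest autocorrelation peak within a
    plausible pitch-period range (here 50-500 Hz)."""
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(min_s * fs), int(max_s * fs)
    return (lo + int(np.argmax(acf[lo:hi]))) / fs
```

For a periodic waveform the returned delay sits at the stimulus period; for the frequency-shifted complexes in experiment 1, the peak of the subtraction waveform's ACF moves in the direction of the perceived pitch shift.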

    Understanding Pitch Perception as a Hierarchical Process with Top-Down Modulation

    Pitch is one of the most important features of natural sounds, underlying the perception of melody in music and prosody in speech. However, the temporal dynamics of pitch processing are still poorly understood. Previous studies suggest that the auditory system uses a wide range of time scales to integrate pitch-related information and that the effective integration time is both task- and stimulus-dependent. None of the existing models of pitch processing can account for such task- and stimulus-dependent variations in processing time scales. This study presents an idealized neurocomputational model, which provides a unified account of the multiple time scales observed in pitch perception. The model is evaluated using a range of perceptual studies, which have not previously been accounted for by a single model, and new results from a neurophysiological experiment. In contrast to other approaches, the current model contains a hierarchy of integration stages and uses feedback to adapt the effective time scales of processing at each stage in response to changes in the input stimulus. The model has features in common with a hierarchical generative process and suggests a key role for efferent connections from central to sub-cortical areas in controlling the temporal dynamics of pitch processing.
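As a toy illustration of the core idea (feedback shortening or lengthening the effective integration window in response to stimulus changes), not the paper's actual model, consider a leaky integrator whose time constant is controlled by a mismatch signal:

```python
import numpy as np

def adaptive_integrator(x, fs, tau_fast=0.005, tau_slow=0.05, thresh=0.1):
    """Leaky integrator with a feedback-controlled time constant (toy
    sketch): a large mismatch between the input and the running estimate
    resets integration to the fast time scale; a stable input lets the
    effective window gradually lengthen toward tau_slow."""
    y = np.zeros(len(x))
    tau = tau_fast
    for n in range(1, len(x)):
        if abs(x[n] - y[n - 1]) > thresh:
            tau = tau_fast                    # stimulus change: integrate quickly
        else:
            tau = min(tau * 1.01, tau_slow)   # stable input: lengthen window
        alpha = 1.0 - np.exp(-1.0 / (tau * fs))
        y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
    return y
```

The single mechanism tracks abrupt changes quickly yet averages over long windows once the input is stable, which is the qualitative behaviour the hierarchical model achieves at each integration stage.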

    Combination of Spectral and Binaurally Created Harmonics in a Common Central Pitch Processor

    A fundamental attribute of human hearing is the ability to extract a residue pitch from harmonic complex sounds such as those produced by musical instruments and the human voice. However, the neural mechanisms that underlie this processing are unclear, as are the locations of these mechanisms in the auditory pathway. The ability to extract a residue pitch corresponding to the fundamental frequency from individual harmonics, even when the fundamental component is absent, has been demonstrated separately for conventional pitches and for Huggins pitch (HP), a stimulus without monaural pitch information. HP is created by presenting the same wideband noise to both ears, except for a narrowband frequency region where the noise is decorrelated across the two ears. The present study investigated whether residue pitch can be derived by combining a component derived solely from binaural interaction (HP) with a spectral component for which no binaural processing is required. Fifteen listeners indicated which of two sequentially presented sounds was higher in pitch. Each sound consisted of two “harmonics,” which independently could be either a spectral or a HP component. Component frequencies were chosen such that the relative pitch judgement revealed whether a residue pitch was heard or not. The results showed that listeners were equally likely to perceive a residue pitch when one component was dichotic and the other was spectral as when the components were both spectral or both dichotic. This suggests that there exists a single mechanism for the derivation of residue pitch from binaurally created components and from spectral components, and that this mechanism operates at or after the level of the dorsal nucleus of the lateral lemniscus (brainstem) or the inferior colliculus (midbrain), which receive inputs from the medial superior olive where temporal information from the two ears is first combined.
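The HP stimulus construction described above can be sketched in a few lines. The interaural phase shift of pi and the bandwidth below are common illustrative choices, not necessarily the values used in this study:

```python
import numpy as np

def huggins_stimulus(fs=44100, dur_s=0.5, f_center=600.0, bw_hz=100.0, seed=0):
    """Huggins-pitch stimulus (sketch): identical wideband noise to both
    ears, except that within a narrow band around f_center the interaural
    phase is shifted (here by pi), decorrelating the ears in that band."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur_s)
    left = rng.standard_normal(n)
    spec = np.fft.rfft(left)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = np.abs(freqs - f_center) < bw_hz / 2.0
    spec[band] *= np.exp(1j * np.pi)   # interaural phase shift in the band
    right = np.fft.irfft(spec, n)
    return left, right
```

Each ear alone receives spectrally flat noise with no pitch cue; presented binaurally, the decorrelated band produces a faint pitch near f_center.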

    Unanesthetized Auditory Cortex Exhibits Multiple Codes for Gaps in Cochlear Implant Pulse Trains

    Cochlear implant listeners receive auditory stimulation through amplitude-modulated electric pulse trains. Auditory nerve studies in animals demonstrate qualitatively different patterns of firing elicited by low versus high pulse rates, suggesting that stimulus pulse rate might influence the transmission of temporal information through the auditory pathway. We tested in awake guinea pigs the temporal acuity of auditory cortical neurons for gaps in cochlear implant pulse trains. Consistent with results using anesthetized conditions, temporal acuity improved with increasing pulse rates. Unlike the anesthetized condition, however, cortical neurons responded in the awake state to multiple distinct features of the gap-containing pulse trains, with the dominant features varying with stimulus pulse rate. Responses to the onset of the trailing pulse train (Trail-ON) provided the most sensitive gap detection at 1,017 and 4,069 pulse-per-second (pps) rates, particularly for short (25 ms) leading pulse trains. In contrast, under conditions of 254 pps rate and long (200 ms) leading pulse trains, a sizeable fraction of units demonstrated greater temporal acuity in the form of robust responses to the offsets of the leading pulse train (Lead-OFF). Finally, TONIC responses exhibited decrements in firing rate during gaps, but were rarely the most sensitive feature. Unlike results from anesthetized conditions, temporal acuity of the most sensitive units was nearly as sharp for brief as for long leading bursts. The differences in stimulus coding across pulse rates likely originate from pulse rate-dependent variations in adaptation in the auditory nerve. Two marked differences from responses to acoustic stimulation were: first, Trail-ON responses to 4,069 pps trains encoded substantially shorter gaps than have been observed with acoustic stimuli; and second, the Lead-OFF gap coding seen for <15 ms gaps in 254 pps stimuli is not seen in responses to sounds. 
The current results may help to explain why moderate pulse rates around 1,000 pps are favored by many cochlear implant listeners.
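The stimulus structure described above (a leading burst, a silent gap, and a trailing burst at a fixed pulse rate) can be sketched as follows; unit-amplitude samples stand in for the biphasic current pulses of a real implant, and the default parameters mirror one of the conditions named in the abstract:

```python
import numpy as np

def gap_pulse_train(fs=100000, pps=1017, lead_ms=25.0, gap_ms=5.0, trail_ms=200.0):
    """Gap-containing pulse train (sketch): a leading pulse burst, a
    silent gap, and a trailing burst, all at the same pulse rate."""
    def burst(dur_ms):
        y = np.zeros(int(fs * dur_ms / 1000.0))
        y[::int(round(fs / pps))] = 1.0   # one pulse per inter-pulse interval
        return y
    gap = np.zeros(int(fs * gap_ms / 1000.0))
    return np.concatenate([burst(lead_ms), gap, burst(trail_ms)])
```

In the study's terms, responses locked to the first pulse after the gap are "Trail-ON" responses, and responses locked to the end of the leading burst are "Lead-OFF" responses.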

    Across-Channel Timing Differences as a Potential Code for the Frequency of Pure Tones

    When a pure tone or low-numbered harmonic is presented to a listener, the resulting travelling wave in the cochlea slows down at the portion of the basilar membrane (BM) tuned to the input frequency due to the filtering properties of the BM. This slowing is reflected in the phase of the response of neurons across the auditory nerve (AN) array. It has been suggested that the auditory system exploits these across-channel timing differences to encode the pitch of both pure tones and resolved harmonics in complex tones. Here, we report a quantitative analysis of previously published data on the response of guinea pig AN fibres, of a range of characteristic frequencies, to pure tones of different frequencies and levels. We conclude that although the use of across-channel timing cues provides an a priori attractive and plausible means of encoding pitch, many of the most obvious metrics for using that cue produce pitch estimates that are strongly influenced by the overall level and therefore are unlikely to provide a straightforward means for encoding the pitch of pure tones.
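A minimal illustration of the timing measurement itself (not one of the paper's metrics): estimate the relative delay between two "channels" from the cross-correlation peak. In real AN data these across-CF delays arise from travelling-wave dispersion and, as the analysis above shows, shift with stimulus level, which is what undermines the simplest versions of the cue.

```python
import numpy as np

def relative_delay(x, y, fs):
    """Delay (in seconds) of y relative to x, taken from the peak of the
    full cross-correlation function."""
    xc = np.correlate(y, x, mode="full")
    lag = int(np.argmax(xc)) - (len(x) - 1)
    return lag / fs
```

With windowed tone-burst inputs the envelope disambiguates the carrier's period ambiguity, so the peak lag recovers the imposed across-channel delay.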